Assignment 2: Convolution
This assignment applies convolutional neural networks (convnets) to image classification on the Cats vs. Dogs dataset. It explores how training-set size affects performance, comparing models trained from scratch against a pretrained network such as VGG16. Data augmentation, dropout, and regularization are used to reduce overfitting. A model is first trained from scratch with increasing sample sizes, and the process is then repeated with the pretrained network, comparing performance at each step.
!ls
'Assignment3_Hceruku_Convolution (1).ipynb' 'Assignment3_hcheruku_Convolution (2).ipynb' 'Assignment3_Hceruku_Convolution (2).ipynb' Assignment3_hcheruku_Convolution.ipynb Assignment3_Hceruku-Convolution.html dogs-vs-cats Assignment3_Hceruku-Convolution.ipynb dogs-vs-cats.zip Assignment3_Hceruku_Convolution.ipynb dogs-vs-cats.zipuryq0nbq.part 'Assignment3_hcheruku_Convolution (1).ipynb' sample_data
!pip install -U gdown
# Replace 'your_file_id' with your actual file ID from the Google Drive link
file_id = '1L-kq2QQDrQrwl0PCgiP3Vkay0GdWGfi5'
gdown_url = f"https://drive.google.com/uc?id={file_id}"
# Download the file
!gdown {gdown_url}
# If the file is a zip, you can unzip it
import zipfile
# Unzipping the dataset (assuming the file is downloaded as 'dogs-vs-cats.zip')
with zipfile.ZipFile('dogs-vs-cats.zip', 'r') as zip_ref:
    zip_ref.extractall('/content/dogs-vs-cats')
# Check the contents
import os
extracted_dir = '/content/dogs-vs-cats'
print(os.listdir(extracted_dir))
Requirement already satisfied: gdown in /usr/local/lib/python3.10/dist-packages (5.2.0)
Downloading...
From: https://drive.google.com/uc?id=1L-kq2QQDrQrwl0PCgiP3Vkay0GdWGfi5
To: /content/dogs-vs-cats.zip
27% 232M/852M [00:03<00:10, 58.8MB/s]
(KeyboardInterrupt tracebacks omitted: both the download and a subsequent attempt to unzip the partial archive were interrupted manually)
Copying images to training, validation, and test directories
from google.colab import files
uploaded = files.upload() # You can upload your .ipynb file here
Saving Assignment3_hcheruku_Convolution.ipynb to Assignment3_hcheruku_Convolution (2).ipynb
import os
print(os.listdir('/content'))
['.config', 'Assignment3_hcheruku_Convolution (1).ipynb', 'dogs-vs-cats', 'dogs-vs-cats.zip', 'Assignment3_hcheruku_Convolution.ipynb', 'Assignment3_Hceruku-Convolution.html', 'Assignment3_hcheruku_Convolution (2).ipynb', 'Assignment3_Hceruku-Convolution.ipynb', 'Assignment3_Hceruku_Convolution (1).ipynb', 'dogs-vs-cats.zipuryq0nbq.part', 'Assignment3_Hceruku_Convolution.ipynb', 'Assignment3_Hceruku_Convolution (2).ipynb', 'sample_data']
import nbformat
from nbconvert import HTMLExporter
def convert_ipynb_to_html(input_file, output_file):
    # Load the notebook
    with open(input_file, 'r') as f:
        notebook_content = nbformat.read(f, as_version=4)
    # Initialize the HTML exporter
    html_exporter = HTMLExporter()
    # Convert the notebook to HTML
    (body, resources) = html_exporter.from_notebook_node(notebook_content)
    # Save the HTML output to a file
    with open(output_file, 'w') as f:
        f.write(body)
# Define input and output paths for the .ipynb and .html files
input_ipynb = '/content/Assignment3_hcheruku_Convolution (2).ipynb'
output_html = '/content/Assignment3_hcheruku-Convolution (2).html'
# Convert the notebook to HTML
convert_ipynb_to_html(input_ipynb, output_html)
from google.colab import files
files.download(output_html)
!unzip -qq dogs-vs-cats.zip
!unzip -qq train.zip
import os, shutil, pathlib
original_dir = pathlib.Path("train")
new_base_dir = pathlib.Path("cats_vs_dogs")
def make_subset(subset_name, start_index, end_index):
    for category in ("cat", "dog"):
        dir = new_base_dir / subset_name / category
        os.makedirs(dir)
        fnames = [f"{category}.{i}.jpg" for i in range(start_index, end_index)]
        for fname in fnames:
            shutil.copyfile(src=original_dir / fname,
                            dst=dir / fname)
Let us first train a model from scratch. Model 1 uses a training sample of 1,000 images per class, a validation sample of 500, and a test sample of 500.
• Techniques: data augmentation to reduce overfitting (dropout and regularization are added in the later models).
• Performance: roughly 67–70% test accuracy (0.700 on the run shown here).
• Key insight: for small datasets, data augmentation helps limit overfitting, but performance remains modest.
from tensorflow.keras.utils import image_dataset_from_directory
make_subset("train", start_index=0, end_index=1000)
make_subset("validation", start_index=1000, end_index=1500)
make_subset("test", start_index=1500, end_index=2000)
train_dataset = image_dataset_from_directory(
    new_base_dir / "train",
    image_size=(180, 180),
    batch_size=32)
validation_dataset = image_dataset_from_directory(
    new_base_dir / "validation",
    image_size=(180, 180),
    batch_size=32)
test_dataset = image_dataset_from_directory(
    new_base_dir / "test",
    image_size=(180, 180),
    batch_size=32)
import numpy as np
import tensorflow as tf
random_numbers = np.random.normal(size=(1000, 16))
dataset = tf.data.Dataset.from_tensor_slices(random_numbers)
for i, element in enumerate(dataset):
    print(element.shape)
    if i >= 2:
        break

batched_dataset = dataset.batch(32)
for i, element in enumerate(batched_dataset):
    print(element.shape)
    if i >= 2:
        break

reshaped_dataset = dataset.map(lambda x: tf.reshape(x, (4, 4)))
for i, element in enumerate(reshaped_dataset):
    print(element.shape)
    if i >= 2:
        break

for data_batch, labels_batch in train_dataset:
    print("data batch shape:", data_batch.shape)
    print("labels batch shape:", labels_batch.shape)
    break
Found 2000 files belonging to 2 classes. Found 1000 files belonging to 2 classes. Found 1000 files belonging to 2 classes. (16,) (16,) (16,) (32, 16) (32, 16) (32, 16) (4, 4) (4, 4) (4, 4) data batch shape: (32, 180, 180, 3) labels batch shape: (32,)
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.2),
]
)
plt.figure(figsize=(10, 10))
for images, _ in train_dataset.take(1):
    for i in range(9):
        augmented_images = data_augmentation(images)
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(augmented_images[0].numpy().astype("uint8"))
        plt.axis("off")
inputs = keras.Input(shape=(180, 180, 3))
x = data_augmentation(inputs)
x = layers.Rescaling(1./255)(x)  # rescale the augmented tensor (x), not the raw inputs, so augmentation is actually applied
x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.Flatten()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.summary()
model.compile(loss="binary_crossentropy",
              optimizer="rmsprop",
              metrics=["accuracy"])
callbacks = [
    keras.callbacks.ModelCheckpoint(
        filepath="convnet_from_scratch.keras",
        save_best_only=True,
        monitor="val_loss")
]
history = model.fit(
    train_dataset,
    epochs=50,
    validation_data=validation_dataset,
    callbacks=callbacks)
test_model = keras.models.load_model("convnet_from_scratch.keras")
test_loss, test_acc = test_model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
Model: "functional_1"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩ │ input_layer_1 (InputLayer) │ (None, 180, 180, 3) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ rescaling (Rescaling) │ (None, 180, 180, 3) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d (Conv2D) │ (None, 178, 178, 32) │ 896 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ max_pooling2d (MaxPooling2D) │ (None, 89, 89, 32) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_1 (Conv2D) │ (None, 87, 87, 64) │ 18,496 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ max_pooling2d_1 (MaxPooling2D) │ (None, 43, 43, 64) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_2 (Conv2D) │ (None, 41, 41, 128) │ 73,856 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ max_pooling2d_2 (MaxPooling2D) │ (None, 20, 20, 128) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_3 (Conv2D) │ (None, 18, 18, 256) │ 295,168 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ max_pooling2d_3 (MaxPooling2D) │ (None, 9, 9, 256) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_4 (Conv2D) │ (None, 7, 7, 256) │ 590,080 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ flatten (Flatten) │ (None, 12544) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ dense (Dense) │ (None, 1) │ 12,545 │ 
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 991,041 (3.78 MB)
Trainable params: 991,041 (3.78 MB)
Non-trainable params: 0 (0.00 B)
Epoch 1/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 16s 150ms/step - accuracy: 0.5083 - loss: 0.7501 - val_accuracy: 0.5000 - val_loss: 0.7291 Epoch 2/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 3s 52ms/step - accuracy: 0.5166 - loss: 0.6952 - val_accuracy: 0.5010 - val_loss: 0.6913 Epoch 3/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 6s 103ms/step - accuracy: 0.5387 - loss: 0.6946 - val_accuracy: 0.5050 - val_loss: 0.7105 Epoch 4/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 7s 58ms/step - accuracy: 0.5764 - loss: 0.6792 - val_accuracy: 0.5920 - val_loss: 0.6643 Epoch 5/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 6s 79ms/step - accuracy: 0.6288 - loss: 0.6546 - val_accuracy: 0.5880 - val_loss: 0.6501 Epoch 6/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 65ms/step - accuracy: 0.6470 - loss: 0.6328 - val_accuracy: 0.6690 - val_loss: 0.6227 Epoch 7/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 3s 53ms/step - accuracy: 0.6776 - loss: 0.6085 - val_accuracy: 0.6600 - val_loss: 0.6202 Epoch 8/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 6s 60ms/step - accuracy: 0.6935 - loss: 0.5755 - val_accuracy: 0.6740 - val_loss: 0.5762 Epoch 9/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 6s 67ms/step - accuracy: 0.7355 - loss: 0.5624 - val_accuracy: 0.7010 - val_loss: 0.5582 Epoch 10/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 3s 52ms/step - accuracy: 0.7770 - loss: 0.4796 - val_accuracy: 0.6860 - val_loss: 0.6051 Epoch 11/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 6s 59ms/step - accuracy: 0.7786 - loss: 0.4593 - val_accuracy: 0.7390 - val_loss: 0.5545 Epoch 12/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 6s 101ms/step - accuracy: 0.8226 - loss: 0.4066 - val_accuracy: 0.7040 - val_loss: 0.5983 Epoch 13/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 57ms/step - accuracy: 0.8356 - loss: 0.3735 - val_accuracy: 0.6660 - val_loss: 0.6468 Epoch 14/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 5s 52ms/step - accuracy: 0.8737 - loss: 0.3205 - val_accuracy: 0.6990 - val_loss: 0.7627 Epoch 15/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 5s 82ms/step - accuracy: 0.9051 - loss: 0.2585 - val_accuracy: 0.7020 - val_loss: 0.7260 Epoch 16/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 8s 52ms/step - accuracy: 0.9220 - loss: 
0.2001 - val_accuracy: 0.7130 - val_loss: 0.9330 Epoch 17/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 59ms/step - accuracy: 0.9395 - loss: 0.1544 - val_accuracy: 0.7070 - val_loss: 1.1664 Epoch 18/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 6s 72ms/step - accuracy: 0.9350 - loss: 0.1544 - val_accuracy: 0.6950 - val_loss: 1.4193 Epoch 19/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 3s 53ms/step - accuracy: 0.9641 - loss: 0.0978 - val_accuracy: 0.7020 - val_loss: 1.4339 Epoch 20/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 6s 64ms/step - accuracy: 0.9730 - loss: 0.0759 - val_accuracy: 0.6940 - val_loss: 1.5138 Epoch 21/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 5s 66ms/step - accuracy: 0.9653 - loss: 0.1174 - val_accuracy: 0.7000 - val_loss: 1.3814 Epoch 22/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 52ms/step - accuracy: 0.9825 - loss: 0.0573 - val_accuracy: 0.7000 - val_loss: 1.6426 Epoch 23/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 7s 85ms/step - accuracy: 0.9746 - loss: 0.0795 - val_accuracy: 0.7150 - val_loss: 1.6776 Epoch 24/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 63ms/step - accuracy: 0.9748 - loss: 0.0895 - val_accuracy: 0.7170 - val_loss: 1.8843 Epoch 25/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 52ms/step - accuracy: 0.9747 - loss: 0.0742 - val_accuracy: 0.7190 - val_loss: 1.7543 Epoch 26/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 3s 52ms/step - accuracy: 0.9911 - loss: 0.0352 - val_accuracy: 0.7110 - val_loss: 1.8407 Epoch 27/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 6s 100ms/step - accuracy: 0.9896 - loss: 0.0365 - val_accuracy: 0.7190 - val_loss: 2.1240 Epoch 28/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 59ms/step - accuracy: 0.9810 - loss: 0.0652 - val_accuracy: 0.7170 - val_loss: 2.3328 Epoch 29/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 3s 52ms/step - accuracy: 0.9828 - loss: 0.0596 - val_accuracy: 0.7130 - val_loss: 2.0968 Epoch 30/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 8s 92ms/step - accuracy: 0.9937 - loss: 0.0260 - val_accuracy: 0.7220 - val_loss: 2.1410 Epoch 31/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 60ms/step - accuracy: 0.9874 - loss: 0.0383 - val_accuracy: 0.7130 - val_loss: 2.2308 Epoch 32/50 63/63 
━━━━━━━━━━━━━━━━━━━━ 5s 52ms/step - accuracy: 0.9915 - loss: 0.0350 - val_accuracy: 0.7120 - val_loss: 2.5090 Epoch 33/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 5s 80ms/step - accuracy: 0.9852 - loss: 0.0515 - val_accuracy: 0.7090 - val_loss: 2.3214 Epoch 34/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 65ms/step - accuracy: 0.9866 - loss: 0.0487 - val_accuracy: 0.6890 - val_loss: 2.5273 Epoch 35/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 5s 57ms/step - accuracy: 0.9909 - loss: 0.0457 - val_accuracy: 0.6810 - val_loss: 3.4120 Epoch 36/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 58ms/step - accuracy: 0.9824 - loss: 0.0750 - val_accuracy: 0.6910 - val_loss: 2.9173 Epoch 37/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 6s 70ms/step - accuracy: 0.9939 - loss: 0.0236 - val_accuracy: 0.7020 - val_loss: 2.8482 Epoch 38/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 3s 52ms/step - accuracy: 0.9951 - loss: 0.0188 - val_accuracy: 0.6960 - val_loss: 3.3014 Epoch 39/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 58ms/step - accuracy: 0.9976 - loss: 0.0048 - val_accuracy: 0.7140 - val_loss: 3.3526 Epoch 40/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 7s 96ms/step - accuracy: 0.9922 - loss: 0.0383 - val_accuracy: 0.7080 - val_loss: 2.8822 Epoch 41/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 8s 53ms/step - accuracy: 0.9988 - loss: 0.0062 - val_accuracy: 0.7050 - val_loss: 3.3752 Epoch 42/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 7s 82ms/step - accuracy: 0.9830 - loss: 0.0562 - val_accuracy: 0.6960 - val_loss: 3.5315 Epoch 43/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 63ms/step - accuracy: 0.9928 - loss: 0.0396 - val_accuracy: 0.7140 - val_loss: 3.2864 Epoch 44/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 52ms/step - accuracy: 0.9994 - loss: 0.0026 - val_accuracy: 0.6970 - val_loss: 3.5313 Epoch 45/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 6s 67ms/step - accuracy: 0.9845 - loss: 0.0632 - val_accuracy: 0.7010 - val_loss: 3.3752 Epoch 46/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 5s 72ms/step - accuracy: 0.9957 - loss: 0.0272 - val_accuracy: 0.7170 - val_loss: 3.2609 Epoch 47/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 4s 58ms/step - accuracy: 0.9919 - loss: 0.0216 - 
val_accuracy: 0.7100 - val_loss: 3.4392 Epoch 48/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 3s 53ms/step - accuracy: 0.9897 - loss: 0.0478 - val_accuracy: 0.7130 - val_loss: 3.1520 Epoch 49/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 6s 94ms/step - accuracy: 0.9866 - loss: 0.0418 - val_accuracy: 0.7080 - val_loss: 3.6170 Epoch 50/50 63/63 ━━━━━━━━━━━━━━━━━━━━ 8s 52ms/step - accuracy: 0.9868 - loss: 0.0722 - val_accuracy: 0.7100 - val_loss: 4.0600 32/32 ━━━━━━━━━━━━━━━━━━━━ 2s 38ms/step - accuracy: 0.6798 - loss: 0.6485 Test accuracy: 0.700
accuracy = history.history["accuracy"]
val_accuracy = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(accuracy) + 1)
plt.plot(epochs, accuracy, "bo", label="Training accuracy")
plt.plot(epochs, val_accuracy, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
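The validation curves above are noisy from epoch to epoch, which makes the trend hard to judge by eye. A simple exponential moving average can damp that noise before plotting — this is a hypothetical `smooth_curve` helper, not part of the assignment code:

```python
def smooth_curve(points, factor=0.8):
    """Exponentially smooth a list of metric values.

    Each new point is blended with the running average, so short-term
    noise is damped while the overall trend is preserved.
    """
    smoothed = []
    for point in points:
        if smoothed:
            previous = smoothed[-1]
            smoothed.append(previous * factor + point * (1 - factor))
        else:
            smoothed.append(point)
    return smoothed

# Example: a noisy, generally increasing accuracy curve
noisy = [0.50, 0.62, 0.55, 0.70, 0.66, 0.74]
print(smooth_curve(noisy))
```

You could pass `history.history["val_accuracy"]` through `smooth_curve` before `plt.plot` to see the trend more clearly.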
For the second model we increase the training set to 3,000 images per class, keeping a validation sample of 500 and a test sample of 500.
from tensorflow.keras.utils import image_dataset_from_directory
make_subset("train_2", start_index=0, end_index=3000)
make_subset("validation_2", start_index=3000, end_index=3500)
make_subset("test_2", start_index=3500, end_index=4000)
train_dataset = image_dataset_from_directory(
    new_base_dir / "train_2",
    image_size=(180, 180),
    batch_size=32)
validation_dataset = image_dataset_from_directory(
    new_base_dir / "validation_2",
    image_size=(180, 180),
    batch_size=32)
test_dataset = image_dataset_from_directory(
    new_base_dir / "test_2",
    image_size=(180, 180),
    batch_size=32)
Found 6000 files belonging to 2 classes. Found 1000 files belonging to 2 classes. Found 1000 files belonging to 2 classes.
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
from keras.callbacks import EarlyStopping
from keras import regularizers
# Define early_stopping_monitor
early_stopping_monitor = EarlyStopping(patience=10)
# Data augmentation layer
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.2),
]
)
# Visualizing some augmented images
plt.figure(figsize=(10, 10))
for images, _ in train_dataset.take(1):
    for i in range(9):
        augmented_images = data_augmentation(images)
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(augmented_images[0].numpy().astype("uint8"))
        plt.axis("off")
# Model architecture
inputs = keras.Input(shape=(180, 180, 3))
x = data_augmentation(inputs)  # wire the augmentation layer into the model
x = layers.Rescaling(1./255)(x)
x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu", kernel_regularizer=regularizers.l2(0.01))(x)
x = layers.Flatten()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.summary()
# Compile the model
# Compile the model
model.compile(loss="binary_crossentropy",
              optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
              metrics=["accuracy"])
# Define callbacks
callbacks = [
    keras.callbacks.ModelCheckpoint(
        filepath="convnet_from_scratch.keras",
        save_best_only=True,
        monitor="val_loss"),
    early_stopping_monitor,
]
# Train the model
history = model.fit(
    train_dataset,
    epochs=50,
    validation_data=validation_dataset,
    callbacks=callbacks
)
# Evaluate the model
test_model = keras.models.load_model("convnet_from_scratch.keras")
test_loss, test_acc = test_model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
Model: "functional_5"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩ │ input_layer_7 (InputLayer) │ (None, 180, 180, 3) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ rescaling_3 (Rescaling) │ (None, 180, 180, 3) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_13 (Conv2D) │ (None, 178, 178, 32) │ 896 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ max_pooling2d_12 (MaxPooling2D) │ (None, 89, 89, 32) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_14 (Conv2D) │ (None, 87, 87, 64) │ 18,496 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ max_pooling2d_13 (MaxPooling2D) │ (None, 43, 43, 64) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_15 (Conv2D) │ (None, 41, 41, 128) │ 73,856 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ max_pooling2d_14 (MaxPooling2D) │ (None, 20, 20, 128) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_16 (Conv2D) │ (None, 18, 18, 256) │ 295,168 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ max_pooling2d_15 (MaxPooling2D) │ (None, 9, 9, 256) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_17 (Conv2D) │ (None, 7, 7, 256) │ 590,080 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ flatten_1 (Flatten) │ (None, 12544) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ dropout (Dropout) │ (None, 12544) │ 0 │ 
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ dense_1 (Dense) │ (None, 1) │ 12,545 │ └──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 991,041 (3.78 MB)
Trainable params: 991,041 (3.78 MB)
Non-trainable params: 0 (0.00 B)
Epoch 1/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 17s 73ms/step - accuracy: 0.5183 - loss: 1.2525 - val_accuracy: 0.5000 - val_loss: 0.7704 Epoch 2/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 16s 57ms/step - accuracy: 0.5553 - loss: 0.6872 - val_accuracy: 0.5910 - val_loss: 0.6516 Epoch 3/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 19s 48ms/step - accuracy: 0.6390 - loss: 0.6359 - val_accuracy: 0.6410 - val_loss: 0.6331 Epoch 4/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.6822 - loss: 0.6113 - val_accuracy: 0.7040 - val_loss: 0.5753 Epoch 5/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 19s 49ms/step - accuracy: 0.6947 - loss: 0.5824 - val_accuracy: 0.6000 - val_loss: 0.6699 Epoch 6/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.7121 - loss: 0.5787 - val_accuracy: 0.7190 - val_loss: 0.5424 Epoch 7/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 8s 44ms/step - accuracy: 0.7292 - loss: 0.5528 - val_accuracy: 0.7290 - val_loss: 0.5274 Epoch 8/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 13s 56ms/step - accuracy: 0.7385 - loss: 0.5360 - val_accuracy: 0.7460 - val_loss: 0.5079 Epoch 9/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 18s 43ms/step - accuracy: 0.7523 - loss: 0.5125 - val_accuracy: 0.7660 - val_loss: 0.4697 Epoch 10/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.7734 - loss: 0.4863 - val_accuracy: 0.7810 - val_loss: 0.4533 Epoch 11/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 9s 50ms/step - accuracy: 0.7923 - loss: 0.4527 - val_accuracy: 0.7630 - val_loss: 0.4973 Epoch 12/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 10s 47ms/step - accuracy: 0.8034 - loss: 0.4352 - val_accuracy: 0.8110 - val_loss: 0.4206 Epoch 13/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.8234 - loss: 0.4082 - val_accuracy: 0.7970 - val_loss: 0.4520 Epoch 14/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 19s 49ms/step - accuracy: 0.8228 - loss: 0.3924 - val_accuracy: 0.8010 - val_loss: 0.4603 Epoch 15/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.8407 - loss: 0.3798 - val_accuracy: 0.7740 - val_loss: 0.5394 Epoch 16/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 20s 
53ms/step - accuracy: 0.8499 - loss: 0.3647 - val_accuracy: 0.7960 - val_loss: 0.4454 Epoch 17/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 10s 52ms/step - accuracy: 0.8654 - loss: 0.3373 - val_accuracy: 0.8030 - val_loss: 0.4956 Epoch 18/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 9s 44ms/step - accuracy: 0.8662 - loss: 0.3239 - val_accuracy: 0.8240 - val_loss: 0.4030 Epoch 19/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 13s 58ms/step - accuracy: 0.8817 - loss: 0.3062 - val_accuracy: 0.8240 - val_loss: 0.4875 Epoch 20/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 9s 46ms/step - accuracy: 0.8838 - loss: 0.2870 - val_accuracy: 0.8380 - val_loss: 0.4036 Epoch 21/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 10s 45ms/step - accuracy: 0.8990 - loss: 0.2534 - val_accuracy: 0.8570 - val_loss: 0.3579 Epoch 22/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.9148 - loss: 0.2377 - val_accuracy: 0.8540 - val_loss: 0.4551 Epoch 23/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 8s 44ms/step - accuracy: 0.9126 - loss: 0.2309 - val_accuracy: 0.8090 - val_loss: 0.5211 Epoch 24/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 11s 49ms/step - accuracy: 0.9269 - loss: 0.2044 - val_accuracy: 0.8530 - val_loss: 0.4531 Epoch 25/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 11s 57ms/step - accuracy: 0.9272 - loss: 0.1943 - val_accuracy: 0.8610 - val_loss: 0.5731 Epoch 26/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 20s 54ms/step - accuracy: 0.9393 - loss: 0.1899 - val_accuracy: 0.8630 - val_loss: 0.3983 Epoch 27/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 18s 44ms/step - accuracy: 0.9452 - loss: 0.1602 - val_accuracy: 0.8210 - val_loss: 0.5879 Epoch 28/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 11s 49ms/step - accuracy: 0.9470 - loss: 0.1549 - val_accuracy: 0.8350 - val_loss: 0.5147 Epoch 29/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 12s 58ms/step - accuracy: 0.9482 - loss: 0.1501 - val_accuracy: 0.8280 - val_loss: 0.5716 Epoch 30/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 18s 48ms/step - accuracy: 0.9556 - loss: 0.1294 - val_accuracy: 0.8450 - val_loss: 0.5858 Epoch 31/50 188/188 ━━━━━━━━━━━━━━━━━━━━ 11s 56ms/step - accuracy: 0.9567 - loss: 
0.1335 - val_accuracy: 0.8350 - val_loss: 0.5241 32/32 ━━━━━━━━━━━━━━━━━━━━ 2s 40ms/step - accuracy: 0.8249 - loss: 0.4601 Test accuracy: 0.842
Training samples: 3,000 per class; validation: 500; test: 500.
• Techniques: added L2 regularization, dropout, early stopping, and data augmentation.
• Performance: about 84% test accuracy (0.842 on this run).
• Key insight: increasing the dataset size and using regularization improves performance and reduces overfitting.
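The `kernel_regularizer=regularizers.l2(0.01)` argument on the last conv layer adds 0.01 × Σw² (the sum of squared weights) to the loss, which is what discourages large weights. A minimal illustration of the penalty formula in plain Python — this is just the math, a hypothetical `l2_penalty` helper, not the Keras internals:

```python
def l2_penalty(weights, factor=0.01):
    """Compute the L2 regularization term: factor * sum of squared weights."""
    return factor * sum(w * w for w in weights)

# Three weights: penalty = 0.01 * (0.25 + 0.25 + 1.0)
print(l2_penalty([0.5, -0.5, 1.0]))
```

Because the penalty grows quadratically, one large weight costs far more than many small ones, pushing the optimizer toward smoother, less overfit solutions.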
accuracy = history.history["accuracy"]
val_accuracy = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(accuracy) + 1)
plt.plot(epochs, accuracy, "bo", label="Training accuracy")
plt.plot(epochs, val_accuracy, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
Now, the third model uses 9,000 training samples per class; we keep the same validation sample of 500 and test sample of 500.
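Within each experiment, `make_subset` draws the train, validation, and test splits from disjoint index ranges of the same `train/` directory (here 0–8999, 9000–9499, and 9500–9999), so no image leaks between splits — though later experiments do reuse indices that earlier experiments placed in other splits. A quick sanity check of the disjointness property, using a hypothetical plain-Python helper:

```python
def ranges_disjoint(*ranges):
    """Return True if none of the (start, end) half-open index ranges overlap."""
    ordered = sorted(ranges)
    return all(prev_end <= next_start
               for (_, prev_end), (next_start, _) in zip(ordered, ordered[1:]))

# Experiment 3's splits: train 0-9000, validation 9000-9500, test 9500-10000
print(ranges_disjoint((0, 9000), (9000, 9500), (9500, 10000)))  # True
# Overlapping ranges are detected
print(ranges_disjoint((0, 3000), (1000, 1500), (1500, 2000)))   # False
```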
from tensorflow.keras.utils import image_dataset_from_directory
make_subset("train_3", start_index=0, end_index=9000)
make_subset("validation_3", start_index=9000, end_index=9500)
make_subset("test_3", start_index=9500, end_index=10000)
train_dataset = image_dataset_from_directory(
    new_base_dir / "train_3",
    image_size=(180, 180),
    batch_size=32)
validation_dataset = image_dataset_from_directory(
    new_base_dir / "validation_3",
    image_size=(180, 180),
    batch_size=32)
test_dataset = image_dataset_from_directory(
    new_base_dir / "test_3",
    image_size=(180, 180),
    batch_size=32)
Found 18000 files belonging to 2 classes. Found 1000 files belonging to 2 classes. Found 1000 files belonging to 2 classes.
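The "Found …" counts above can be sanity-checked by hand: `make_subset` copies one index range per class, so each split holds 2 × (end_index − start_index) images. A quick check:

```python
# Expected image counts for the Task 3 splits: two classes (cat, dog),
# each contributing (end_index - start_index) files per subset.
splits = {
    "train_3": (0, 9000),
    "validation_3": (9000, 9500),
    "test_3": (9500, 10000),
}

def expected_count(start_index, end_index, num_classes=2):
    # One file per index per class
    return num_classes * (end_index - start_index)

for name, (start, end) in splits.items():
    print(name, expected_count(start, end))
# train_3 18000, validation_3 1000, test_3 1000 -- matching the output above
```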
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras import regularizers
import matplotlib.pyplot as plt
# Early stopping halts training once validation loss stops improving
early_stopping_monitor = EarlyStopping(patience=10)
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.2),
]
)
plt.figure(figsize=(10, 10))
for images, _ in train_dataset.take(1):
    for i in range(9):
        augmented_images = data_augmentation(images)
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(augmented_images[0].numpy().astype("uint8"))
        plt.axis("off")
inputs = keras.Input(shape=(180, 180, 3))
x = layers.Rescaling(1./255)(inputs)
x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu", kernel_regularizer=regularizers.l2(0.01))(x)
x = layers.Flatten()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.summary()
model.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
metrics=["accuracy"])
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath="convnet_from_scratch.keras",
save_best_only=True,
monitor="val_loss"), early_stopping_monitor
]
history = model.fit(
train_dataset,
epochs=50,
validation_data=validation_dataset,
callbacks=callbacks)
test_model = keras.models.load_model("convnet_from_scratch.keras")
test_loss, test_acc = test_model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
Model: "functional_8"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩ │ input_layer_11 (InputLayer) │ (None, 180, 180, 3) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ rescaling_5 (Rescaling) │ (None, 180, 180, 3) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_22 (Conv2D) │ (None, 178, 178, 32) │ 896 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ max_pooling2d_20 (MaxPooling2D) │ (None, 89, 89, 32) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_23 (Conv2D) │ (None, 87, 87, 64) │ 18,496 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ max_pooling2d_21 (MaxPooling2D) │ (None, 43, 43, 64) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_24 (Conv2D) │ (None, 41, 41, 128) │ 73,856 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ max_pooling2d_22 (MaxPooling2D) │ (None, 20, 20, 128) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_25 (Conv2D) │ (None, 18, 18, 256) │ 295,168 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ max_pooling2d_23 (MaxPooling2D) │ (None, 9, 9, 256) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ conv2d_26 (Conv2D) │ (None, 7, 7, 256) │ 590,080 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ flatten_2 (Flatten) │ (None, 12544) │ 0 │ ├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ dropout_1 (Dropout) │ (None, 12544) │ 0 │ 
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤ │ dense_2 (Dense) │ (None, 1) │ 12,545 │ └──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 991,041 (3.78 MB)
Trainable params: 991,041 (3.78 MB)
Non-trainable params: 0 (0.00 B)
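The parameter counts in the summary above can be reproduced by hand: a Conv2D layer has (kernel_h × kernel_w × in_channels + 1) × filters weights (the +1 is the bias), and a Dense layer has (in_features + 1) × units. A quick check against the table:

```python
def conv2d_params(in_channels, filters, kernel=3):
    # Weights plus one bias per filter
    return (kernel * kernel * in_channels + 1) * filters

def dense_params(in_features, units):
    # Weights plus one bias per unit
    return (in_features + 1) * units

layer_params = [
    conv2d_params(3, 32),          # conv2d_22 -> 896
    conv2d_params(32, 64),         # conv2d_23 -> 18,496
    conv2d_params(64, 128),        # conv2d_24 -> 73,856
    conv2d_params(128, 256),       # conv2d_25 -> 295,168
    conv2d_params(256, 256),       # conv2d_26 -> 590,080
    dense_params(7 * 7 * 256, 1),  # dense_2 -> 12,545 (Flatten sees 7*7*256 = 12,544)
]
print(sum(layer_params))  # 991041, matching "Total params"
```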
Epoch 1/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 33s 53ms/step - accuracy: 0.5608 - loss: 0.9191 - val_accuracy: 0.5320 - val_loss: 0.7918 Epoch 2/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 27s 48ms/step - accuracy: 0.6873 - loss: 0.5999 - val_accuracy: 0.7610 - val_loss: 0.5210 Epoch 3/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 42s 50ms/step - accuracy: 0.7503 - loss: 0.5254 - val_accuracy: 0.7800 - val_loss: 0.4846 Epoch 4/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 38s 44ms/step - accuracy: 0.7929 - loss: 0.4702 - val_accuracy: 0.7950 - val_loss: 0.4599 Epoch 5/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 43s 48ms/step - accuracy: 0.8210 - loss: 0.4309 - val_accuracy: 0.8120 - val_loss: 0.4219 Epoch 6/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 41s 49ms/step - accuracy: 0.8366 - loss: 0.3897 - val_accuracy: 0.8180 - val_loss: 0.4126 Epoch 7/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 25s 45ms/step - accuracy: 0.8561 - loss: 0.3510 - val_accuracy: 0.8840 - val_loss: 0.2882 Epoch 8/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 42s 47ms/step - accuracy: 0.8729 - loss: 0.3212 - val_accuracy: 0.8730 - val_loss: 0.3229 Epoch 9/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 42s 48ms/step - accuracy: 0.8882 - loss: 0.2954 - val_accuracy: 0.8430 - val_loss: 0.3787 Epoch 10/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 39s 45ms/step - accuracy: 0.8953 - loss: 0.2765 - val_accuracy: 0.8920 - val_loss: 0.2730 Epoch 11/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 28s 50ms/step - accuracy: 0.9057 - loss: 0.2487 - val_accuracy: 0.9070 - val_loss: 0.2411 Epoch 12/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 27s 48ms/step - accuracy: 0.9221 - loss: 0.2273 - val_accuracy: 0.9030 - val_loss: 0.2531 Epoch 13/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 25s 45ms/step - accuracy: 0.9215 - loss: 0.2161 - val_accuracy: 0.9200 - val_loss: 0.2536 Epoch 14/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 28s 49ms/step - accuracy: 0.9249 - loss: 0.2138 - val_accuracy: 0.8320 - val_loss: 0.5285 Epoch 15/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 25s 45ms/step - accuracy: 0.9339 - loss: 0.1896 - val_accuracy: 0.8670 - val_loss: 0.4465 Epoch 16/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 
40s 44ms/step - accuracy: 0.9395 - loss: 0.1873 - val_accuracy: 0.9160 - val_loss: 0.2473 Epoch 17/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 28s 49ms/step - accuracy: 0.9450 - loss: 0.1637 - val_accuracy: 0.9190 - val_loss: 0.2364 Epoch 18/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 39s 46ms/step - accuracy: 0.9473 - loss: 0.1648 - val_accuracy: 0.9220 - val_loss: 0.2352 Epoch 19/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 41s 46ms/step - accuracy: 0.9461 - loss: 0.1607 - val_accuracy: 0.9060 - val_loss: 0.3237 Epoch 20/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 43s 50ms/step - accuracy: 0.9499 - loss: 0.1532 - val_accuracy: 0.8800 - val_loss: 0.3657 Epoch 21/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 39s 47ms/step - accuracy: 0.9546 - loss: 0.1423 - val_accuracy: 0.9030 - val_loss: 0.3494 Epoch 22/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 26s 46ms/step - accuracy: 0.9551 - loss: 0.1422 - val_accuracy: 0.8960 - val_loss: 0.3244 Epoch 23/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 43s 50ms/step - accuracy: 0.9570 - loss: 0.1394 - val_accuracy: 0.8920 - val_loss: 0.3710 Epoch 24/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 38s 45ms/step - accuracy: 0.9558 - loss: 0.1423 - val_accuracy: 0.8780 - val_loss: 0.5197 Epoch 25/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 41s 45ms/step - accuracy: 0.9556 - loss: 0.1475 - val_accuracy: 0.9150 - val_loss: 0.2945 Epoch 26/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 43s 48ms/step - accuracy: 0.9607 - loss: 0.1359 - val_accuracy: 0.8970 - val_loss: 0.3919 Epoch 27/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 27s 48ms/step - accuracy: 0.9562 - loss: 0.1424 - val_accuracy: 0.9200 - val_loss: 0.3622 Epoch 28/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 39s 45ms/step - accuracy: 0.9604 - loss: 0.1299 - val_accuracy: 0.9100 - val_loss: 0.2793 32/32 ━━━━━━━━━━━━━━━━━━━━ 3s 63ms/step - accuracy: 0.9065 - loss: 0.3009 Test accuracy: 0.909
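The step counts in the log follow directly from the dataset and batch sizes: 18,000 training images in batches of 32 give 563 steps per epoch, and the 1,000 test images give the 32 evaluation steps shown above.

```python
import math

batch_size = 32
# Steps per epoch = ceil(number of images / batch size)
print(math.ceil(18000 / batch_size))  # 563 training steps
print(math.ceil(1000 / batch_size))   # 32 evaluation steps
```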
• Training Samples: 9000 per class, Validation: 500 per class, Test: 500 per class
• Techniques: Same model as Task 2, but trained on a much larger training set.
• Performance: Achieved a test accuracy of 90.9%.
• Key Insight: A significantly larger training set improved the model's performance even further, although increasing the sample size eventually yields diminishing returns.
accuracy = history.history["accuracy"]
val_accuracy = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(accuracy) + 1)
plt.plot(epochs, accuracy, "bo", label="Training accuracy")
plt.plot(epochs, val_accuracy, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
• Training Samples: 9000 per class (the Task 3 datasets are reused here), Validation: 500 per class, Test: 500 per class
• Techniques: VGG16 pretrained convolutional base with fine-tuning and data augmentation.
• Performance: Achieved a test accuracy of 89.9%.
• Key Insight: Fine-tuning a pretrained model like VGG16 delivers strong performance without having to learn a feature extractor from scratch.
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
# Instantiating the VGG16 convolutional base
conv_base = keras.applications.vgg16.VGG16(
weights="imagenet",
include_top=False)
# Freeze all layers except the last four
conv_base.trainable = True
for layer in conv_base.layers[:-4]:
    layer.trainable = False
# Adding a data augmentation stage and a classifier to the convolutional base
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.2),
]
)
inputs = keras.Input(shape=(180, 180, 3))
x = data_augmentation(inputs)
x = keras.applications.vgg16.preprocess_input(x)
x = conv_base(x)
x = layers.Flatten()(x)
x = layers.Dense(256)(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model_pre_trained_1 = keras.Model(inputs, outputs)
# Fine-tuning the model
model_pre_trained_1.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.RMSprop(learning_rate=1e-6),
metrics=["accuracy"])
# Used early stopping to stop optimization
early_stopping_monitor = EarlyStopping(patience=10)
callbacks = [
    keras.callbacks.ModelCheckpoint(
        # Distinct checkpoint file so the scratch model's checkpoint is not overwritten
        filepath="vgg16_fine_tuned.keras",
        save_best_only=True,
        monitor="val_loss"),
    early_stopping_monitor
]
history = model_pre_trained_1.fit(
    train_dataset,
    epochs=50,
    validation_data=validation_dataset,
    callbacks=callbacks)
plt.figure(figsize=(10, 10))
for images, _ in train_dataset.take(1):
    for i in range(9):
        augmented_images = data_augmentation(images)
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(augmented_images[0].numpy().astype("uint8"))
        plt.axis("off")
plt.show()
test_model = keras.models.load_model("vgg16_fine_tuned.keras")
test_loss, test_acc = test_model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5 58889256/58889256 ━━━━━━━━━━━━━━━━━━━━ 4s 0us/step Epoch 1/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 25s 45ms/step - accuracy: 0.9625 - loss: 0.1327 - val_accuracy: 0.8640 - val_loss: 0.6049 Epoch 2/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 43s 48ms/step - accuracy: 0.9622 - loss: 0.1296 - val_accuracy: 0.9210 - val_loss: 0.3329 Epoch 3/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 27s 47ms/step - accuracy: 0.9667 - loss: 0.1247 - val_accuracy: 0.9020 - val_loss: 0.3883 Epoch 4/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 39s 44ms/step - accuracy: 0.9678 - loss: 0.1127 - val_accuracy: 0.9100 - val_loss: 0.4177 Epoch 5/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 27s 49ms/step - accuracy: 0.9664 - loss: 0.1355 - val_accuracy: 0.8990 - val_loss: 0.3045 Epoch 6/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 26s 47ms/step - accuracy: 0.9650 - loss: 0.1418 - val_accuracy: 0.9160 - val_loss: 0.4115 Epoch 7/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 39s 44ms/step - accuracy: 0.9674 - loss: 0.1202 - val_accuracy: 0.9140 - val_loss: 0.3955 Epoch 8/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 28s 49ms/step - accuracy: 0.9636 - loss: 0.1390 - val_accuracy: 0.9260 - val_loss: 0.3393 Epoch 9/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 39s 45ms/step - accuracy: 0.9685 - loss: 0.1267 - val_accuracy: 0.9260 - val_loss: 0.3884 Epoch 10/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 41s 45ms/step - accuracy: 0.9686 - loss: 0.1166 - val_accuracy: 0.9150 - val_loss: 0.3226 Epoch 11/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 42s 47ms/step - accuracy: 0.9697 - loss: 0.1346 - val_accuracy: 0.9180 - val_loss: 0.3311 Epoch 12/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 42s 49ms/step - accuracy: 0.9688 - loss: 0.1153 - val_accuracy: 0.8450 - val_loss: 1.2947 Epoch 13/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 39s 45ms/step - accuracy: 0.9697 - loss: 0.1239 - val_accuracy: 0.8850 - val_loss: 0.6911 Epoch 14/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 41s 44ms/step - accuracy: 0.9673 - loss: 0.1259 - val_accuracy: 0.9100 - 
val_loss: 0.3926 Epoch 15/50 563/563 ━━━━━━━━━━━━━━━━━━━━ 42s 47ms/step - accuracy: 0.9675 - loss: 0.1212 - val_accuracy: 0.9140 - val_loss: 0.4096
32/32 ━━━━━━━━━━━━━━━━━━━━ 2s 46ms/step - accuracy: 0.9038 - loss: 0.2550 Test accuracy: 0.899
# Plotting the results
accuracy = history.history["accuracy"]
val_accuracy = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(accuracy) + 1)
plt.plot(epochs, accuracy, "bo", label="Training accuracy")
plt.plot(epochs, val_accuracy, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
Pretrained Model 2: ResNet50V2 convolutional base
import os
import shutil
import pathlib
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint
original_dir = pathlib.Path("train")
new_base_dir = pathlib.Path("cats_vs_dogs_small_3")
def make_subset(subset_name, start_index, end_index):
    for category in ("cat", "dog"):
        category_dir = new_base_dir / subset_name / category
        os.makedirs(category_dir, exist_ok=True)
        fnames = [f"{category}.{i}.jpg" for i in range(start_index, end_index)]
        for fname in fnames:
            shutil.copyfile(src=original_dir / fname, dst=category_dir / fname)
make_subset("validation", start_index=0, end_index=500)
make_subset("test", start_index=500, end_index=1000)
make_subset("train", start_index=1000, end_index=5000)
train_dataset = tf.keras.utils.image_dataset_from_directory(
new_base_dir / "train",
image_size=(180, 180),
batch_size=32)
validation_dataset = tf.keras.utils.image_dataset_from_directory(
new_base_dir / "validation",
image_size=(180, 180),
batch_size=32)
test_dataset = tf.keras.utils.image_dataset_from_directory(
new_base_dir / "test",
image_size=(180, 180),
batch_size=32)
model = Sequential([
    layers.Input(shape=(180, 180, 3)),  # explicit Input layer avoids the input_shape deprecation warning
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
callbacks = [
ModelCheckpoint(
filepath="convnet_from_scratch_with_augmentation_4000.keras",
save_best_only=True,
monitor="val_loss")
]
history = model.fit(
train_dataset,
epochs=20,
validation_data=validation_dataset,
callbacks=callbacks)
Found 8000 files belonging to 2 classes. Found 1000 files belonging to 2 classes. Found 1000 files belonging to 2 classes. Epoch 1/20
250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 60ms/step - accuracy: 0.5354 - loss: 13.3029 - val_accuracy: 0.5790 - val_loss: 0.6715 Epoch 2/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 13s 50ms/step - accuracy: 0.5971 - loss: 0.6655 - val_accuracy: 0.6460 - val_loss: 0.6273 Epoch 3/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 21s 52ms/step - accuracy: 0.6428 - loss: 0.6353 - val_accuracy: 0.6910 - val_loss: 0.5888 Epoch 4/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 51ms/step - accuracy: 0.6875 - loss: 0.6003 - val_accuracy: 0.6790 - val_loss: 0.6119 Epoch 5/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 48ms/step - accuracy: 0.7146 - loss: 0.5598 - val_accuracy: 0.7300 - val_loss: 0.5443 Epoch 6/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 22s 52ms/step - accuracy: 0.7722 - loss: 0.4818 - val_accuracy: 0.7070 - val_loss: 0.5901 Epoch 7/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 49ms/step - accuracy: 0.8028 - loss: 0.4253 - val_accuracy: 0.7310 - val_loss: 0.5725 Epoch 8/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 21s 50ms/step - accuracy: 0.8444 - loss: 0.3512 - val_accuracy: 0.7340 - val_loss: 0.6122 Epoch 9/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 21s 53ms/step - accuracy: 0.8718 - loss: 0.2862 - val_accuracy: 0.7400 - val_loss: 0.6660 Epoch 10/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 19s 49ms/step - accuracy: 0.9125 - loss: 0.2137 - val_accuracy: 0.7210 - val_loss: 0.8016 Epoch 11/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 21s 50ms/step - accuracy: 0.9346 - loss: 0.1742 - val_accuracy: 0.7270 - val_loss: 0.8152 Epoch 12/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 11s 43ms/step - accuracy: 0.9470 - loss: 0.1552 - val_accuracy: 0.7490 - val_loss: 0.9162 Epoch 13/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 13s 51ms/step - accuracy: 0.9589 - loss: 0.1147 - val_accuracy: 0.7270 - val_loss: 1.1159 Epoch 14/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 49ms/step - accuracy: 0.9664 - loss: 0.0892 - val_accuracy: 0.6980 - val_loss: 1.2721 Epoch 15/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 11s 44ms/step - accuracy: 0.9580 - loss: 0.1253 - val_accuracy: 0.6900 - val_loss: 1.3668 Epoch 16/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 22s 
52ms/step - accuracy: 0.9773 - loss: 0.0741 - val_accuracy: 0.7430 - val_loss: 1.2645 Epoch 17/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 21s 53ms/step - accuracy: 0.9827 - loss: 0.0526 - val_accuracy: 0.7210 - val_loss: 1.4190 Epoch 18/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 18s 42ms/step - accuracy: 0.9571 - loss: 0.1507 - val_accuracy: 0.7590 - val_loss: 1.0564 Epoch 19/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 12s 49ms/step - accuracy: 0.9729 - loss: 0.0816 - val_accuracy: 0.7430 - val_loss: 1.2828 Epoch 20/20 250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 46ms/step - accuracy: 0.9856 - loss: 0.0476 - val_accuracy: 0.7380 - val_loss: 1.4537
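The Flatten input size of this model can be traced by hand: each 3×3 "valid" convolution shrinks the spatial size by 2, and each 2×2 max-pool halves it (floor division). A quick trace for the 180×180 input:

```python
def conv_valid(size, kernel=3):
    # "valid" padding: output = input - kernel + 1
    return size - kernel + 1

def pool(size, window=2):
    # Non-overlapping max pooling: floor(input / window)
    return size // window

size = 180
for _ in range(4):  # four Conv2D + MaxPooling2D pairs
    size = pool(conv_valid(size))
print(size)                 # 9: 180 -> 89 -> 43 -> 20 -> 9
print(size * size * 128)    # 10368 features reach the Flatten layer
```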
# Plotting the results
accuracy = history.history["accuracy"]
val_accuracy = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(accuracy) + 1)
plt.plot(epochs, accuracy, "bo", label="Training accuracy")
plt.plot(epochs, val_accuracy, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
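Note that the model in this section is a plain CNN trained from scratch; despite the section heading, the ResNet50V2 base is never attached. A minimal sketch of how ResNet50V2 could serve as a frozen convolutional base, mirroring the VGG16 setup used earlier (`weights=None` here only to skip the ImageNet download; in practice you would pass `weights="imagenet"`):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Sketch only: use weights="imagenet" in practice
conv_base = keras.applications.ResNet50V2(
    weights=None,
    include_top=False,
    input_shape=(180, 180, 3))
conv_base.trainable = False  # freeze the pretrained base

inputs = keras.Input(shape=(180, 180, 3))
x = keras.applications.resnet_v2.preprocess_input(inputs)  # ResNet-specific preprocessing
x = conv_base(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
resnet_model = keras.Model(inputs, outputs)
resnet_model.compile(loss="binary_crossentropy",
                     optimizer="rmsprop",
                     metrics=["accuracy"])
print(resnet_model.output_shape)
```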
Task 4 - ResNet50V2 Summary:
• Training Samples: 4000 per class, Validation: 500 per class, Test: 500 per class
• Techniques: A simple CNN trained from scratch (the ResNet50V2 base is not attached in this cell).
• Performance: Validation accuracy plateaued near 74% while training accuracy approached 99%.
• Key Insight: Without a pretrained base, data augmentation, or regularization, the model overfits quickly and generalizes poorly.
Pretrained Model 3: MobileNetV2
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
# Instantiating the MobileNetV2 convolutional base
conv_base = keras.applications.MobileNetV2(
weights="imagenet",
include_top=False)
# Freeze all layers except the last four
conv_base.trainable = True
for layer in conv_base.layers[:-4]:
    layer.trainable = False
# Adding a data augmentation stage and a classifier to the convolutional base
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.2),
]
)
inputs = keras.Input(shape=(180, 180, 3))
x = data_augmentation(inputs)
x = keras.applications.mobilenet_v2.preprocess_input(x)
x = conv_base(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256)(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model_pre_trained_2 = keras.Model(inputs, outputs)
# Fine-tuning the model
model_pre_trained_2.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.RMSprop(learning_rate=1e-6),
metrics=["accuracy"])
# Used early stopping to stop optimization
early_stopping_monitor = EarlyStopping(patience=10)
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath="convnet_from_scratch_2.keras",
save_best_only=True,
monitor="val_loss"), early_stopping_monitor
]
history = model_pre_trained_2.fit(
train_dataset,
epochs=50,
validation_data=validation_dataset,
callbacks=callbacks)
plt.figure(figsize=(10, 10))
for images, _ in train_dataset.take(1):
    for i in range(9):
        augmented_images = data_augmentation(images)
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(augmented_images[0].numpy().astype("uint8"))
        plt.axis("off")
plt.show()
# Evaluate the model on the test set
test_model = keras.models.load_model("convnet_from_scratch_2.keras")
test_loss, test_acc = test_model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
<ipython-input-75-f9dfafc74a43>:6: UserWarning: `input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default. conv_base = keras.applications.MobileNetV2(
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/mobilenet_v2/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_224_no_top.h5 9406464/9406464 ━━━━━━━━━━━━━━━━━━━━ 2s 0us/step Epoch 1/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 24s 65ms/step - accuracy: 0.5451 - loss: 0.8980 - val_accuracy: 0.8230 - val_loss: 0.4019 Epoch 2/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 49ms/step - accuracy: 0.7006 - loss: 0.6096 - val_accuracy: 0.9080 - val_loss: 0.2487 Epoch 3/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 22s 56ms/step - accuracy: 0.7963 - loss: 0.4468 - val_accuracy: 0.9330 - val_loss: 0.1823 Epoch 4/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 54ms/step - accuracy: 0.8525 - loss: 0.3656 - val_accuracy: 0.9510 - val_loss: 0.1476 Epoch 5/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 13s 51ms/step - accuracy: 0.8782 - loss: 0.3041 - val_accuracy: 0.9570 - val_loss: 0.1267 Epoch 6/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 22s 56ms/step - accuracy: 0.8951 - loss: 0.2679 - val_accuracy: 0.9640 - val_loss: 0.1124 Epoch 7/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 57ms/step - accuracy: 0.9057 - loss: 0.2391 - val_accuracy: 0.9680 - val_loss: 0.1022 Epoch 8/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 56ms/step - accuracy: 0.9174 - loss: 0.2157 - val_accuracy: 0.9700 - val_loss: 0.0942 Epoch 9/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 19s 51ms/step - accuracy: 0.9195 - loss: 0.2116 - val_accuracy: 0.9720 - val_loss: 0.0883 Epoch 10/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 22s 57ms/step - accuracy: 0.9216 - loss: 0.2033 - val_accuracy: 0.9740 - val_loss: 0.0836 Epoch 11/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 57ms/step - accuracy: 0.9270 - loss: 0.1854 - val_accuracy: 0.9750 - val_loss: 0.0796 Epoch 12/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 12s 50ms/step - accuracy: 0.9367 - loss: 0.1757 - val_accuracy: 0.9770 - val_loss: 0.0763 Epoch 13/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 22s 57ms/step - accuracy: 0.9346 - loss: 0.1703 - val_accuracy: 0.9760 - val_loss: 0.0732 Epoch 14/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 55ms/step - accuracy: 0.9408 - loss: 0.1608 - 
val_accuracy: 0.9770 - val_loss: 0.0709 Epoch 15/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 53ms/step - accuracy: 0.9434 - loss: 0.1584 - val_accuracy: 0.9790 - val_loss: 0.0689 Epoch 16/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 21s 56ms/step - accuracy: 0.9430 - loss: 0.1521 - val_accuracy: 0.9790 - val_loss: 0.0673 Epoch 17/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 56ms/step - accuracy: 0.9453 - loss: 0.1451 - val_accuracy: 0.9800 - val_loss: 0.0653 Epoch 18/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 21s 56ms/step - accuracy: 0.9431 - loss: 0.1512 - val_accuracy: 0.9800 - val_loss: 0.0641 Epoch 19/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 56ms/step - accuracy: 0.9427 - loss: 0.1633 - val_accuracy: 0.9810 - val_loss: 0.0630 Epoch 20/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 56ms/step - accuracy: 0.9437 - loss: 0.1460 - val_accuracy: 0.9810 - val_loss: 0.0616 Epoch 21/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 55ms/step - accuracy: 0.9426 - loss: 0.1529 - val_accuracy: 0.9810 - val_loss: 0.0606 Epoch 22/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 54ms/step - accuracy: 0.9476 - loss: 0.1337 - val_accuracy: 0.9820 - val_loss: 0.0602 Epoch 23/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 21s 56ms/step - accuracy: 0.9482 - loss: 0.1440 - val_accuracy: 0.9810 - val_loss: 0.0594 Epoch 24/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 19s 48ms/step - accuracy: 0.9454 - loss: 0.1500 - val_accuracy: 0.9820 - val_loss: 0.0579 Epoch 25/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 55ms/step - accuracy: 0.9439 - loss: 0.1516 - val_accuracy: 0.9810 - val_loss: 0.0572 Epoch 26/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 55ms/step - accuracy: 0.9452 - loss: 0.1475 - val_accuracy: 0.9820 - val_loss: 0.0566 Epoch 27/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 56ms/step - accuracy: 0.9483 - loss: 0.1436 - val_accuracy: 0.9820 - val_loss: 0.0561 Epoch 28/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 15s 59ms/step - accuracy: 0.9532 - loss: 0.1297 - val_accuracy: 0.9800 - val_loss: 0.0561 Epoch 29/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 57ms/step - accuracy: 0.9529 - loss: 0.1235 - val_accuracy: 0.9790 - val_loss: 
0.0557 Epoch 30/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 57ms/step - accuracy: 0.9494 - loss: 0.1368 - val_accuracy: 0.9810 - val_loss: 0.0547 Epoch 31/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 56ms/step - accuracy: 0.9516 - loss: 0.1283 - val_accuracy: 0.9800 - val_loss: 0.0545 Epoch 32/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 13s 53ms/step - accuracy: 0.9507 - loss: 0.1389 - val_accuracy: 0.9820 - val_loss: 0.0537 Epoch 33/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 12s 49ms/step - accuracy: 0.9591 - loss: 0.1214 - val_accuracy: 0.9800 - val_loss: 0.0533 Epoch 34/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 22s 55ms/step - accuracy: 0.9505 - loss: 0.1335 - val_accuracy: 0.9790 - val_loss: 0.0536 Epoch 35/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 58ms/step - accuracy: 0.9527 - loss: 0.1213 - val_accuracy: 0.9800 - val_loss: 0.0532 Epoch 36/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 55ms/step - accuracy: 0.9551 - loss: 0.1268 - val_accuracy: 0.9800 - val_loss: 0.0533 Epoch 37/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 21s 55ms/step - accuracy: 0.9562 - loss: 0.1215 - val_accuracy: 0.9800 - val_loss: 0.0527 Epoch 38/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 57ms/step - accuracy: 0.9539 - loss: 0.1359 - val_accuracy: 0.9800 - val_loss: 0.0523 Epoch 39/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 18s 47ms/step - accuracy: 0.9554 - loss: 0.1211 - val_accuracy: 0.9800 - val_loss: 0.0518 Epoch 40/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 22s 54ms/step - accuracy: 0.9540 - loss: 0.1242 - val_accuracy: 0.9800 - val_loss: 0.0522 Epoch 41/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 21s 55ms/step - accuracy: 0.9526 - loss: 0.1314 - val_accuracy: 0.9800 - val_loss: 0.0519 Epoch 42/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 22s 61ms/step - accuracy: 0.9545 - loss: 0.1186 - val_accuracy: 0.9800 - val_loss: 0.0517 Epoch 43/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 12s 49ms/step - accuracy: 0.9601 - loss: 0.1128 - val_accuracy: 0.9810 - val_loss: 0.0513 Epoch 44/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 22s 54ms/step - accuracy: 0.9568 - loss: 0.1196 - val_accuracy: 0.9810 - val_loss: 0.0514 Epoch 45/50 250/250 
━━━━━━━━━━━━━━━━━━━━ 21s 55ms/step - accuracy: 0.9576 - loss: 0.1298 - val_accuracy: 0.9810 - val_loss: 0.0511 Epoch 46/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 57ms/step - accuracy: 0.9501 - loss: 0.1304 - val_accuracy: 0.9810 - val_loss: 0.0503 Epoch 47/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 57ms/step - accuracy: 0.9557 - loss: 0.1268 - val_accuracy: 0.9830 - val_loss: 0.0500 Epoch 48/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 20s 55ms/step - accuracy: 0.9566 - loss: 0.1283 - val_accuracy: 0.9820 - val_loss: 0.0500 Epoch 49/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 14s 54ms/step - accuracy: 0.9610 - loss: 0.1128 - val_accuracy: 0.9820 - val_loss: 0.0504 Epoch 50/50 250/250 ━━━━━━━━━━━━━━━━━━━━ 21s 55ms/step - accuracy: 0.9576 - loss: 0.1160 - val_accuracy: 0.9830 - val_loss: 0.0499
32/32 ━━━━━━━━━━━━━━━━━━━━ 3s 39ms/step - accuracy: 0.9745 - loss: 0.0489 Test accuracy: 0.979
# Plotting the results
accuracy = history.history["accuracy"]
val_accuracy = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(accuracy) + 1)
plt.plot(epochs, accuracy, "bo", label="Training accuracy")
plt.plot(epochs, val_accuracy, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
Task 4 - MobileNetV2 Summary:
• Training Samples: 4000 per class, Validation: 500 per class, Test: 500 per class
• Techniques: MobileNetV2 pretrained network with fine-tuning and data augmentation.
• Performance: Achieved a test accuracy of 97.9%.
• Key Insight: MobileNetV2 achieved the best performance of all the models, thanks to its pretrained ImageNet features, lightweight architecture, and effective fine-tuning.
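Pulling together the test accuracies printed by `model.evaluate` in the sections above (the labels here are shorthand for the four experiments, not identifiers from the code):

```python
# Test accuracies as reported by evaluate() in the sections above
results = {
    "scratch (3000/class)": 0.842,
    "scratch (9000/class)": 0.909,
    "VGG16 fine-tuned": 0.899,
    "MobileNetV2 fine-tuned": 0.979,
}
best = max(results, key=results.get)
for name, acc in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:24s} {acc:.3f}")
print("Best:", best)  # MobileNetV2 fine-tuned
```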
Overall Conclusion: